Results 1 - 20 of 39
1.
Biomed Eng Online ; 23(1): 31, 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38468262

ABSTRACT

BACKGROUND: Ultrasound three-dimensional visualization, a cutting-edge technology in medical imaging, enhances diagnostic accuracy by providing a more comprehensive and readable portrayal of anatomical structures than traditional two-dimensional ultrasound. Segmentation of multiple targets is crucial to this visualization. However, multi-target segmentation of ultrasound images is hampered by noise interference, inaccurate boundaries, and difficulty in segmenting small structures. Using neck ultrasound images, this study focuses on multi-target segmentation methods for the thyroid and its surrounding tissues. METHOD: We improved U-Net++ and propose PA-Unet++ to enhance the multi-target segmentation accuracy of the thyroid and its surrounding tissues in the presence of ultrasound noise. A pyramid pooling module integrates multi-scale feature information to facilitate segmentation of structures of various sizes, and an attention gate mechanism is applied to each decoding layer to progressively highlight target tissues and suppress background pixels. RESULTS: Video data obtained from serial 2D ultrasound scans of the thyroid served as the dataset for this paper. A total of 4600 images containing 23,000 annotated regions were divided into training and test sets at a ratio of 9:1. Compared with U-Net++, the Dice score of our model increased from 78.78% to 81.88% (+3.10%), the mIOU increased from 73.44% to 80.35% (+6.91%), and the PA index increased from 92.95% to 94.79% (+1.84%). CONCLUSIONS: Accurate segmentation is fundamental for various clinical applications, including disease diagnosis, treatment planning, and monitoring. This study will have a positive impact on 3D visualization capabilities and on clinical decision-making and research in ultrasound imaging.
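As a concrete illustration of the attention-gate idea described above, the following is a minimal PyTorch sketch of an additive attention gate applied to a decoder skip connection; channel sizes and module names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an additive attention gate for a U-Net-style decoder.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, gate_ch, skip_ch, inter_ch):
        super().__init__()
        self.w_g = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)   # gating (decoder) signal
        self.w_x = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)   # encoder skip connection
        self.psi = nn.Sequential(
            nn.Conv2d(inter_ch, 1, kernel_size=1),
            nn.Sigmoid(),                                        # attention coefficients in [0, 1]
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g, x):
        # g: decoder feature map, x: encoder skip feature map (same spatial size here)
        a = self.psi(self.relu(self.w_g(g) + self.w_x(x)))
        return x * a                                             # suppress background pixels

# usage: gated = AttentionGate(64, 64, 32)(decoder_feat, skip_feat)
```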


Subjects
Three-Dimensional Imaging, Thyroid Gland, Thyroid Gland/diagnostic imaging, Research Design, Technology, Computer-Assisted Image Processing
2.
Regen Biomater ; 11: rbad082, 2024.
Article in English | MEDLINE | ID: mdl-38213739

ABSTRACT

Biomaterials with surface nanostructures effectively enhance protein secretion and stimulate tissue regeneration. When nanoparticles (NPs) enter a living system, they quickly interact with proteins in body fluids, forming the protein corona (PC). Accurate prediction of PC composition is critical for analyzing the osteoinductivity of biomaterials and guiding the reverse design of NPs, but it remains a significant challenge. Although several machine learning (ML) models such as Random Forest (RF) have been used for PC prediction, they often fail to account for extreme values in the abundance region of PC adsorption and struggle to improve accuracy because of the imbalanced data distribution. In this study, resampling embedding was introduced to resolve the imbalanced distribution of PC data. Various ML models were evaluated, the RF model was ultimately used for prediction, and good correlation coefficient (R2) and root-mean-square deviation (RMSE) values were obtained. Our ablation experiments showed that the proposed method achieved an R2 of 0.68, an improvement of approximately 10%, and an RMSE of 0.90, a reduction of approximately 10%. Furthermore, through label-free quantification of four NPs, hydroxyapatite (HA), titanium dioxide (TiO2), silicon dioxide (SiO2), and silver (Ag), we achieved a prediction performance with an R2 value >0.70 using Random Oversampling. Feature analysis revealed that PC composition is most strongly influenced by the incubation plasma concentration, PDI, and surface modification.
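To make the resampling step concrete, here is a hedged Python sketch of oversampling sparsely populated abundance bins before fitting a Random Forest regressor and reporting R2 and RMSE; the synthetic features, binning scheme, and hyperparameters are illustrative assumptions rather than the authors' pipeline.

```python
# Sketch: oversample rare abundance bins, then fit a Random Forest regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                    # e.g. size, zeta potential, PDI, plasma conc. (synthetic)
y = rng.gamma(shape=1.5, scale=2.0, size=500)    # skewed protein abundance (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Oversample rows that fall in sparsely populated abundance bins.
bins = np.digitize(y_tr, np.quantile(y_tr, [0.25, 0.5, 0.75, 0.95]))
counts = np.bincount(bins)
idx = np.concatenate([
    rng.choice(np.flatnonzero(bins == b), size=counts.max(), replace=True)
    for b in np.unique(bins)
])
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr[idx], y_tr[idx])

pred = model.predict(X_te)
print("R2 =", r2_score(y_te, pred), "RMSE =", mean_squared_error(y_te, pred) ** 0.5)
```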

3.
Sensors (Basel) ; 23(20)2023 Oct 20.
Article in English | MEDLINE | ID: mdl-37896706

ABSTRACT

Deep learning (DL) models for breast ultrasound (BUS) image analysis face challenges with data imbalance and limited atypical tumor samples. Generative Adversarial Networks (GANs) address these challenges by providing efficient data augmentation for small datasets. However, current GAN approaches fail to capture the structural features of BUS, so the generated images lack structural legitimacy and look unrealistic. Furthermore, generated images require manual annotation before they can be used for different downstream tasks. We therefore propose a two-stage GAN framework, 2s-BUSGAN, for generating annotated BUS images. It consists of a Mask Generation Stage (MGS) and an Image Generation Stage (IGS), generating benign and malignant BUS images from the corresponding tumor contours. We employ a feature-matching loss (FML) to enhance the quality of the generated images and a differential augmentation module (DAM) to improve GAN performance on small datasets. We conducted experiments on two datasets, BUSI and Collected. The results indicate that the quality of the generated images is improved compared with traditional GAN methods. Our generated images were also evaluated by ultrasound experts, showing that they could deceive doctors. A comparative evaluation showed that our method outperforms traditional GAN methods when used to train segmentation and classification models. Our method achieved classification accuracies of 69% and 85.7% on the two datasets, respectively, about 3% and 2% higher than the traditional augmentation model. The segmentation model trained on 2s-BUSGAN-augmented datasets achieved Dice scores of 75% and 73% on the two datasets, respectively, higher than those obtained with traditional augmentation methods. Our research tackles the imbalanced and limited BUS image data challenge, and the 2s-BUSGAN augmentation method holds potential for enhancing deep learning model performance in this field.
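As a small illustration of the feature-matching idea, the sketch below computes an L1 feature-matching term over discriminator intermediate features in PyTorch; which layers are matched, and the L1 form itself, are assumptions for illustration rather than details taken from the paper.

```python
# Sketch of a feature-matching loss: match discriminator intermediate features
# of real and generated images (a common GAN stabilization trick).
import torch.nn.functional as F

def feature_matching_loss(disc_feats_real, disc_feats_fake):
    # disc_feats_*: lists of intermediate feature maps from the discriminator
    return sum(
        F.l1_loss(fake, real.detach())
        for real, fake in zip(disc_feats_real, disc_feats_fake)
    ) / len(disc_feats_real)
```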


Subjects
Neoplasms, Physicians, Female, Humans, Mammary Ultrasonography, Computer-Assisted Image Processing
4.
Sensors (Basel) ; 23(12)2023 Jun 08.
Article in English | MEDLINE | ID: mdl-37420617

ABSTRACT

Because of the heterogeneity of ultrasound (US) images and the indeterminate US texture of liver fibrosis (LF), automatic evaluation of LF from US images is still challenging. This study therefore proposes a hierarchical Siamese network that combines information from liver and spleen US images to improve the accuracy of LF grading. The proposed method has two stages. In stage one, a dual-channel Siamese network is trained to extract features from paired liver and spleen patches cropped from US images to avoid vascular interference; the L1 distance is then used to quantify the liver-spleen differences (LSDs). In stage two, the pretrained weights from stage one are transferred into the Siamese feature extractor of the LF staging model, and a classifier is trained on the fusion of the liver and LSD features for LF staging. The study was conducted retrospectively on US images of 286 patients with histologically proven liver fibrosis stages. Our method achieved a precision of 93.92% and a sensitivity of 91.65% for cirrhosis (S4) diagnosis, about 8% higher than the baseline model. The accuracy of advanced fibrosis (≥S3) diagnosis and of multi-stage fibrosis classification (≤S2 vs. S3 vs. S4) both improved by about 5%, reaching 90.40% and 83.93%, respectively. This study proposes a novel method that combines hepatic and splenic US images and improves the accuracy of LF staging, indicating the great potential of liver-spleen texture comparison for noninvasive assessment of LF from US images.
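The stage-one idea can be sketched as follows: a shared encoder embeds the liver and spleen patches, and the element-wise L1 difference of the embeddings serves as the liver-spleen difference (LSD) feature that is fused with the liver feature; the backbone and feature dimensions below are illustrative assumptions, not the paper's architecture.

```python
# Sketch of a Siamese encoder producing liver features and L1 liver-spleen differences.
import torch
import torch.nn as nn

class SiameseLSD(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(           # stand-in for the paper's dual-channel extractor
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )

    def forward(self, liver_patch, spleen_patch):
        f_liver = self.encoder(liver_patch)       # shared weights for both organs
        f_spleen = self.encoder(spleen_patch)
        lsd = torch.abs(f_liver - f_spleen)       # element-wise L1 "distance" features
        return torch.cat([f_liver, lsd], dim=1)   # fused features for the staging classifier

# usage: feats = SiameseLSD()(liver, spleen) with (N, 1, H, W) tensors
```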


Subjects
Liver, Spleen, Humans, Spleen/diagnostic imaging, Spleen/pathology, Retrospective Studies, Liver/diagnostic imaging, Liver/pathology, Liver Cirrhosis/diagnostic imaging, Fibrosis
5.
Ophthalmol Ther ; 12(2): 1081-1095, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36692813

ABSTRACT

INTRODUCTION: Compared with traditional fundus examination techniques, ultra-widefield fundus (UWF) imaging provides 200° panoramic images of the retina, which allows better detection of peripheral retinal lesions. However, UWF currently offers an effective solution for detection only and still lacks efficient diagnostic support. This study proposes a retinal lesion detection model that automatically locates and identifies six relatively typical, high-incidence peripheral retinal lesions in UWF images, enabling early screening and rapid diagnosis. METHODS: A total of 24,602 augmented UWF images, labelled by 5 ophthalmologists with the 6 peripheral retinal lesions and normal appearance, were included in this study. An object detection model, You Only Look Once X (YOLOX), was modified and trained to locate and classify the six peripheral retinal lesions: rhegmatogenous retinal detachment (RRD), retinal breaks (RB), white without pressure (WWOP), cystic retinal tuft (CRT), lattice degeneration (LD), and paving-stone degeneration (PSD). We added a coordinate attention block and a generalized intersection over union (GIoU) loss to YOLOX and evaluated it for accuracy, sensitivity, specificity, precision, F1 score, and mean average precision (mAP). The model shows the exact location and a saliency map of the detected retinal lesions, thus contributing to efficient screening and diagnosis. RESULTS: The model reached an average accuracy of 96.64%, sensitivity of 87.97%, specificity of 98.04%, precision of 87.01%, F1 score of 87.39%, and mAP of 86.03% on test dataset 1 (248 UWF images), and an average accuracy of 95.04%, sensitivity of 83.90%, specificity of 96.70%, precision of 78.73%, F1 score of 81.96%, and mAP of 80.59% on external test dataset 2 (586 UWF images), showing that the system performs well in distinguishing the six peripheral retinal lesions. CONCLUSION: Focusing on peripheral retinal lesions, this work proposes a deep learning model that automatically recognizes multiple peripheral retinal lesions in UWF images and localizes their exact positions. It therefore has potential for early screening and intelligent diagnosis of peripheral retinal lesions.
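For reference, a minimal PyTorch sketch of the GIoU loss mentioned above is shown below, operating on (x1, y1, x2, y2) box tensors; it illustrates the standard formulation rather than the exact implementation used with YOLOX.

```python
# Standard generalized IoU (GIoU) loss for axis-aligned boxes of shape (N, 4).
import torch

def giou_loss(pred, target, eps=1e-7):
    x1 = torch.max(pred[:, 0], target[:, 0]); y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2]); y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)

    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter
    iou = inter / (union + eps)

    # smallest enclosing box
    ex1 = torch.min(pred[:, 0], target[:, 0]); ey1 = torch.min(pred[:, 1], target[:, 1])
    ex2 = torch.max(pred[:, 2], target[:, 2]); ey2 = torch.max(pred[:, 3], target[:, 3])
    enclose = (ex2 - ex1) * (ey2 - ey1)

    giou = iou - (enclose - union) / (enclose + eps)
    return (1.0 - giou).mean()
```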

6.
J Xray Sci Technol ; 31(2): 337-355, 2023.
Article in English | MEDLINE | ID: mdl-36617768

ABSTRACT

BACKGROUND: Melanoma is a highly malignant tumor arising from melanocytes, prone to local recurrence and distant metastasis, and associated with a poor prognosis. It is also difficult for inexperienced dermatologists to detect because skin lesions can share similar appearances in color, shape, and contour. OBJECTIVE: To develop and test a new computer-aided diagnosis scheme to detect melanoma skin cancer. METHODS: In the new scheme, unsupervised clustering based on deep metric learning is first performed to group highly similar images, and the corresponding model weights are used as the teacher model for the next stage. Second, benefiting from knowledge distillation, attention transfer is adopted so that the classification model learns similarity features and category information simultaneously, which improves diagnostic accuracy over a conventional classification method. RESULTS: The validation sets comprised 8 categories and 2443 samples. The highest accuracy of the new scheme is 0.7253, 5 percentage points higher than the baseline (0.6794). Specifically, the F1 scores of three malignant lesions, BCC (basal cell carcinoma), SCC (squamous cell carcinoma), and MEL (melanoma), increase from 0.65 to 0.73, 0.28 to 0.37, and 0.54 to 0.58, respectively. On two test sets, HAM with 3844 samples and BCN with 6375 samples, the highest accuracies are 0.68 and 0.53, respectively, both higher than the baseline (0.649 and 0.516). Additionally, the F1 scores of BCC, SCC, and MEL are 0.49, 0.2, and 0.45 on the HAM dataset and 0.6, 0.14, and 0.55 on the BCN dataset, respectively, which are also higher than the baseline F1 scores. CONCLUSIONS: This study demonstrates that the similarity clustering method extracts related feature information and gathers similar images together. Moreover, based on attention transfer, the proposed classification framework improves the overall accuracy and F1 score of skin lesion diagnosis.
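The attention-transfer term can be sketched in PyTorch as follows: spatial attention maps derived from teacher and student feature maps are normalized and matched with an L2 penalty. Which layer pairs are matched, and the exact normalization, are assumptions made for illustration.

```python
# Sketch of an attention-transfer distillation loss over paired feature maps.
import torch
import torch.nn.functional as F

def attention_map(feat):
    # feat: (N, C, H, W) -> channel-averaged squared activations, flattened and L2-normalized
    return F.normalize(feat.pow(2).mean(dim=1).flatten(1), dim=1)

def attention_transfer_loss(student_feats, teacher_feats):
    return sum(
        (attention_map(s) - attention_map(t.detach())).pow(2).mean()
        for s, t in zip(student_feats, teacher_feats)
    )
```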


Subjects
Basal Cell Carcinoma, Squamous Cell Carcinoma, Melanoma, Skin Neoplasms, Humans, Sensitivity and Specificity, Skin Neoplasms/diagnostic imaging, Skin Neoplasms/pathology, Melanoma/diagnostic imaging, Melanoma/pathology, Basal Cell Carcinoma/diagnostic imaging, Basal Cell Carcinoma/pathology, Squamous Cell Carcinoma/pathology
7.
J Xray Sci Technol ; 30(6): 1243-1260, 2022.
Article in English | MEDLINE | ID: mdl-36155489

ABSTRACT

BACKGROUND: Standard planes (SPs) are crucial for the diagnosis of fetal brain malformation. However, acquiring the SPs accurately is very time-consuming and requires extensive experience because of the large variation in fetal posture and the complexity of SP definitions. OBJECTIVE: This study aims to present a guidance approach that assists the sonographer in obtaining the SPs more accurately and more quickly. METHODS: First, the sonographer uses a 3D probe to scan the fetal head and obtain 3D volume data; an affine transformation then calibrates the volume to the standard body position, and the corresponding 3D head model is established in real time. When the sonographer scans a plane with the 2D probe, the position of the current plane is clearly shown in the 3D head model by our RLNet (regression location network), which guides the sonographer to obtain the three SPs more accurately. Once the three SPs are located, the sagittal and coronal planes are generated automatically from their spatial relationship with the three SPs. RESULTS: Experiments on 3200 2D US images show that the RLNet achieves an average angle error of 3.91 ± 2.86° for the transthalamic plane, an obvious improvement over other published results. The automatically generated coronal and sagittal SPs conform to the diagnostic criteria and the diagnostic requirements of fetal brain malformation. CONCLUSIONS: A deep learning-based guided scanning method for ultrasonic brain malformation screening is proposed for the first time, and it has pragmatic value for future clinical application.


Subjects
Head, Prenatal Ultrasonography, Pregnancy, Female, Humans, Prenatal Ultrasonography/methods, Brain/diagnostic imaging, Ultrasonography
8.
J Xray Sci Technol ; 30(5): 967-981, 2022.
Article in English | MEDLINE | ID: mdl-35661047

ABSTRACT

BACKGROUND: Intelligent diagnosis of thyroid nodules in ultrasound images is an important research issue. Automatically locating the region of interest (ROI) of thyroid nodules and providing pre-diagnosis results can help doctors diagnose faster and more accurately. OBJECTIVES: This study aims to propose a model that can detect multiple nodules stably and accurately, avoiding missed detections and misjudgments. In addition, the model must be fast enough for real-time diagnosis in ultrasound images. METHODS: Based on object detection technology, we propose an accurate, robust, and high-speed network with a multiscale fusion strategy, called Efficient-YOLO, which localizes and recognizes nodules at the same time. Multiple metrics are used to measure the diagnostic ability of the model. RESULTS: Experimental results on 3,562 ultrasound images show that the new model greatly increases detection accuracy and speed compared with the baseline model. The best mAP is 92.64%, and the fastest detection speed is 45.1 frames per second. CONCLUSIONS: This study proposes an effective method for diagnosing thyroid nodules automatically that meets real-time requirements, indicating its effectiveness and feasibility for future clinical application.


Subjects
Thyroid Nodule, Benchmarking, Humans, Neural Networks (Computer), Thyroid Nodule/diagnostic imaging, Ultrasonography/methods
9.
Biomolecules ; 12(2)2022 02 01.
Article in English | MEDLINE | ID: mdl-35204741

ABSTRACT

The detection of Mycobacterium tuberculosis (Mtb) infection plays an important role in the control of tuberculosis (TB), one of the leading infectious diseases in the world. Recent advances in artificial intelligence-aided cellular image processing and analysis have shown great promise for automated Mtb detection. However, current cell imaging protocols often involve costly and time-consuming fluorescence staining, which has become a major bottleneck for procedural automation. To solve this problem, we developed a novel automated system (AutoCellANLS) for cell detection and recognition of morphological features in phase-contrast micrographs using unsupervised machine learning (UML) approaches and deep convolutional neural networks (CNNs). The detection algorithm adaptively and automatically detects single cells in the cell population using an improved level set segmentation model with the circular Hough transform (CHT). In addition, we designed a Cell-net using transfer learning strategies (TLS) to classify virulence-specific cellular morphological changes that would otherwise be indistinguishable to the naked eye. The system simultaneously classifies and segments microscopic images of cell populations, achieving an average accuracy of 95.13% for cell detection and 95.94% for morphological classification, with a sensitivity of 94.87% and a specificity of 96.61%. AutoCellANLS detects significant morphological differences between infected and uninfected mammalian cells throughout the infection period (2 hpi/12 hpi/24 hpi). Moreover, it overcomes the drawback of manual intervention and increases accuracy by more than 11% compared with our previous work, which used AI-aided imaging analysis to detect mycobacterial infection in macrophages. AutoCellANLS is also efficient and versatile when tailored to different cell line datasets (RAW264.7 and THP-1). This proof-of-concept study provides a novel avenue to investigate bacterial pathogenesis at a macroscopic level and offers great promise for the diagnosis of bacterial infections.
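The circular Hough transform step can be sketched with OpenCV as below; the synthetic image and all parameter values are illustrative and would need tuning to real micrographs.

```python
# Sketch of circular Hough transform detection of round cell-like objects.
import cv2
import numpy as np

# synthetic stand-in for a phase-contrast micrograph with a few round cells
img = np.zeros((256, 256), dtype=np.uint8)
for cx, cy in [(60, 60), (150, 100), (200, 200)]:
    cv2.circle(img, (cx, cy), 14, 180, -1)

blurred = cv2.medianBlur(img, 5)
circles = cv2.HoughCircles(
    blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
    param1=80, param2=15, minRadius=8, maxRadius=25,
)
print(None if circles is None else circles[0])   # detected (x, y, r) candidates
```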


Subjects
Mycobacterium tuberculosis, Tuberculosis, Animals, Artificial Intelligence, Computer-Assisted Image Processing/methods, Mammals, Neural Networks (Computer), Tuberculosis/diagnosis
10.
Sichuan Da Xue Xue Bao Yi Xue Ban ; 52(4): 566-569, 2021 Jul.
Article in Chinese | MEDLINE | ID: mdl-34323032

ABSTRACT

Biomedical engineering (BME, biomedical materials track) is a typical field of interdisciplinary integration. Its specialty education simultaneously carries the dual reform responsibilities of the new engineering education and the new medical education because of its interdisciplinary nature, comprehensive scope of knowledge, and position at the cutting edge of technology. In this paper, we analyze the opportunities and challenges faced by BME (biomedical materials track) specialty education on the basis of worldwide trends and frontiers in biomedical materials. From the perspective of the new requirements that major national strategies and industrial development place on the qualifications and competence of professionals specializing in biomedical materials, we reflect on the specialized education of BME (biomedical materials track) under the background of the new engineering and new medical education. Furthermore, we propose reconstructing the specialized core knowledge system along the main line of the reactions and responses between biomedical materials and the human body at different levels, and setting up a series of biomedical materials science courses with materiobiology at their core. We also propose a diversified, integrated reform model of the training system that incorporates production, learning, research, and application to cultivate highly competent BME (biomedical materials track) professionals. This paper attempts to contribute to the solution of the major issue of how to train the innovative talents and leaders who will pioneer a new round of diagnosis and treatment technology revolution and the development of the medical device industry.


Subjects
Biomedical Engineering, Universities, Biomedical Engineering/education, Curriculum, Humans, Learning
11.
J Xray Sci Technol ; 29(1): 75-90, 2021.
Article in English | MEDLINE | ID: mdl-33136086

ABSTRACT

BACKGROUND: Thyroid ultrasonography is widely used to diagnose thyroid nodules in the clinic. Automatic localization of nodules can promote the development of intelligent thyroid diagnosis and reduce the workload of radiologists. However, in addition to the low contrast and high noise of ultrasound images, thyroid nodules are diverse in shape and vary greatly in size, so thyroid nodule detection in ultrasound images is still a challenging task. OBJECTIVE: This study proposes an automatic detection algorithm to locate nodules in B-mode and Doppler ultrasound images. The method can be used to screen thyroid nodules and provide a basis for subsequent automatic segmentation and intelligent diagnosis. METHODS: We develop and optimize an improved YOLOv3 model for detecting thyroid nodules in B-mode and Doppler ultrasound images. The improvements include (1) using the high-resolution network (HRNet) as the backbone to gradually extract high-level semantic features and reduce missed and false detections, (2) optimizing the loss function for single-target detection such as nodules, and (3) obtaining the anchor boxes by clustering the ground-truth nodule boxes in the dataset. RESULTS: Experiments on 8000 clinical ultrasound images show that the new method effectively detects thyroid nodules, achieving 94.53% mean precision and 95.00% mean recall. CONCLUSIONS: The study demonstrates a new automated method that achieves high detection accuracy and effectively locates thyroid nodules in various ultrasound images without any user interaction, indicating its potential clinical value for thyroid nodule screening.
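To illustrate the anchor-box clustering step, the sketch below applies plain k-means to box widths and heights; YOLO-style pipelines often cluster with a 1 - IoU distance instead, so treat the metric, synthetic data, and cluster count as assumptions.

```python
# Sketch: derive anchor boxes by clustering ground-truth box sizes with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
boxes_wh = rng.uniform(10, 120, size=(400, 2))   # synthetic (width, height) pairs of nodule boxes

kmeans = KMeans(n_clusters=9, n_init=10, random_state=0).fit(boxes_wh)
anchors = kmeans.cluster_centers_[np.argsort(kmeans.cluster_centers_.prod(axis=1))]
print("anchor boxes (w, h), sorted by area:\n", anchors)
```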


Subjects
Thyroid Nodule, Algorithms, Cluster Analysis, Humans, Neuroimaging, Thyroid Nodule/diagnostic imaging, Ultrasonography
12.
ACS Omega ; 5(50): 32706-32714, 2020 Dec 22.
Article in English | MEDLINE | ID: mdl-33376908

ABSTRACT

Pectinase is widely used in juice production, food processing, and other fields. However, owing to its poor stability, free pectinase is difficult to separate from the substrate after hydrolysis and cannot be reused, which limits its industrial use. Immobilized pectinase can solve these problems well. We prepared an enzyme immobilization carrier, porous spherical reduced graphene oxide (rGO), which has a rich pore structure, a large specific surface area, high hardness, and good biocompatibility with the enzyme. We then evaluated the performance of pectinase immobilized on the porous spherical rGO and characterized its structure by IR, XRD, and SEM. Using this material as the immobilization carrier improves the enzyme loading and catalytic activity. After 10 consecutive uses, the porous spherical rGO-immobilized enzyme still maintained about 87% of its initial relative activity, indicating that the immobilized pectinase has strong cycling stability; its thermal stability, acid-base tolerance, and storage stability were also superior to those of free pectinase. The results were compared with those of other studies on immobilized pectinase, and the relative activity of pectinase immobilized on porous spherical rGO remained at a high level after 10 consecutive uses. Overall, spherical rGO is an excellent carrier material for enzyme immobilization.

13.
J Xray Sci Technol ; 28(5): 905-922, 2020.
Article in English | MEDLINE | ID: mdl-32986647

ABSTRACT

BACKGROUND: Automatic segmentation of individual tooth roots is a key technology for reconstructing three-dimensional dental models from cone beam computed tomography (CBCT) images, which is of great significance for orthodontic, implant, and other dental diagnosis and treatment planning. OBJECTIVES: Tooth root segmentation is currently done mainly by hand because the tooth root and the alveolar bone have similar gray levels in CBCT images. This study aims to explore an automatic tooth root segmentation algorithm for CBCT axial image sequences based on deep learning. METHODS: We propose a new automatic tooth root segmentation method based on a deep learning U-Net with attention gates (AGs). Because adjacent slices in a CBCT sequence are strongly correlated, a recurrent neural network (RNN) was applied to extract intra-slice and inter-slice context. To develop and test the method, 24 CBCT sequences containing 1160 images were used to train the network and 5 CBCT sequences containing 361 images were used to test it. RESULTS: On the test dataset, the segmentation accuracy measured by intersection over union (IOU), Dice similarity coefficient (DICE), average precision rate (APR), average recall rate (ARR), and average symmetric surface distance (ASSD) was 0.914, 0.955, 95.8%, 95.3%, and 0.145 mm, respectively. CONCLUSIONS: The study demonstrates that the new method combining attention U-Net with an RNN yields promising results for automatic tooth root segmentation, which has the potential to improve segmentation efficiency and accuracy in future clinical practice.
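The reported overlap metrics can be computed from binary masks as in the short sketch below; the 0.5 threshold for binarizing predictions is an assumption.

```python
# Sketch: IoU and Dice computed from a predicted mask and a ground-truth mask.
import numpy as np

def iou_and_dice(pred, gt, eps=1e-7):
    pred = pred > 0.5          # binarize soft predictions
    gt = gt > 0.5
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / (union + eps)
    dice = 2 * inter / (pred.sum() + gt.sum() + eps)
    return iou, dice
```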


Subjects
Cone-Beam Computed Tomography/methods, Neural Networks (Computer), Tooth Root/diagnostic imaging, Algorithms, Humans
14.
J Xray Sci Technol ; 28(6): 1123-1139, 2020.
Article in English | MEDLINE | ID: mdl-32804114

ABSTRACT

BACKGROUND: Calcification is an important criterion for classifying thyroid nodules as benign or malignant. Deep learning provides an important means for automatic calcification recognition, but annotating pixel-level labels for calcifications with various morphologies is tedious. OBJECTIVE: This study aims to improve the accuracy of calcification recognition and location prediction while reducing the number of pixel-level labels needed for model training. METHODS: We propose a collaborative supervision network based on attention gating (CS-AGnet) composed of two branches: a segmentation network and a classification network. The reorganized two-stage collaborative semi-supervised model was trained under the supervision of all image-level labels and a small number of pixel-level labels. RESULTS: Although our semi-supervised network used only 30% (289 cases) of the pixel-level labels for training, the accuracy of calcification recognition reached 92.1%, very close to the 92.9% obtained with full supervision using 100% (966 cases) of the pixel-level labels. CS-AGnet focuses the model's attention on calcification objects and thus achieves higher accuracy than other deep learning methods. CONCLUSIONS: Our collaborative semi-supervised model performs well in calcification recognition and reduces the number of manual pixel-level annotations. It may also serve as a useful reference for object recognition in medical datasets with few labels.


Subjects
Calcinosis/diagnostic imaging, Computer-Assisted Image Interpretation/methods, Supervised Machine Learning, Thyroid Nodule/diagnostic imaging, Ultrasonography/methods, Algorithms, Humans
15.
ACS Omega ; 5(32): 20062-20069, 2020 Aug 18.
Article in English | MEDLINE | ID: mdl-32832760

ABSTRACT

Pectinase is an industrially important enzyme widely used in juice production, food processing, and other fields. Immobilized enzyme systems that allow pectinase to be reused several times are beneficial to these fields. Herein, we developed mechanically strong and recyclable porous hydroxyapatite/calcium alginate composite beads for pectinase immobilization. Under the optimal immobilization parameters (40 °C, pH 4.0, 5.2 U/L pectinase concentration, and 4 h reaction time), the pectinase showed the highest enzymatic activity (8995 U/mg) and immobilization yield (91%). The thermal stability and pH tolerance of the immobilized pectinase were superior to those of free pectinase. After 30 days of storage, the free and immobilized pectinase retained 20% and 50% of their initial activity, respectively. These composite beads might therefore be a promising support for the efficient immobilization of industrially important enzymes.

16.
J Digit Imaging ; 33(5): 1218-1223, 2020 10.
Article in English | MEDLINE | ID: mdl-32519253

ABSTRACT

This study aimed to construct a breast ultrasound computer-aided prediction model based on the convolutional neural network (CNN) and investigate its diagnostic efficiency in breast cancer. A retrospective analysis was carried out, including 5000 breast ultrasound images (benign: 2500; malignant: 2500) as the training group. Different prediction models were constructed using CNN (based on InceptionV3, VGG16, ResNet50, and VGG19). Additionally, the constructed prediction models were tested using 1007 images of the test group (benign: 788; malignant: 219). The receiver operating characteristic curves were drawn, and the corresponding areas under the curve (AUCs) were obtained. The model with the highest AUC was selected, and its diagnostic accuracy was compared with that obtained by sonographers who performed and interpreted ultrasonographic examinations using 683 images of the comparison group (benign: 493; malignant: 190). In the model test with the test group images, the AUCs of the constructed InceptionV3, VGG16, ResNet50, and VGG19 models were 0.905, 0.866, 0.851, and 0.847, respectively. The InceptionV3 model showed the largest AUC, with statistically significant differences compared with the other models (P < 0.05). In the classification of the comparison group images, the AUC (0.913) of the InceptionV3 model was larger than that (0.846) obtained by sonographers, showing a statistically significant difference (P < 0.05). The breast ultrasound computer-aided prediction model based on CNN showed high accuracy in the prediction of breast cancer.


Subjects
Breast Neoplasms, Breast Neoplasms/diagnostic imaging, Computers, Female, Humans, Neural Networks (Computer), Retrospective Studies, Mammary Ultrasonography
17.
J Xray Sci Technol ; 27(4): 685-701, 2019.
Article in English | MEDLINE | ID: mdl-31282468

ABSTRACT

BACKGROUND: Automatic tumor detection in breast ultrasound (BUS) images is important for subsequent image processing and has been researched for decades. However, a robust method is still lacking owing to the poor quality of BUS images. OBJECTIVE: To propose and test a salient object detection method for BUS images. METHODS: The BUS image is preprocessed by adaptively selective replacement and a speckle-reducing anisotropic diffusion (SRAD) algorithm. The preprocessed image is then segmented into superpixels by the simple linear iterative clustering (SLIC) algorithm to form a graph model, and the saliency of the graph nodes is calculated from the absorption time of an absorbing Markov chain (AMC). Finally, the initial saliency map is refined using the recurrence time of an ergodic Markov chain (EMC) and a distance-weighting formula. RESULTS: Results of the proposed method were compared both qualitatively and quantitatively with two saliency detection models. The proposed method outperformed the comparison models and yielded the highest accuracy (97.49% vs. 86.63% and 90.33%) on a dataset of 1000 BUS images. CONCLUSIONS: After adaptively selective replacement, the AMC can effectively distinguish tumors from background through random walks.
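The absorption-time computation at the core of the AMC step can be sketched as follows: with boundary superpixels treated as absorbing nodes, the expected absorption time of the transient nodes is t = (I - Q)^(-1) 1, where Q is the transient-to-transient block of the transition matrix. The sketch assumes the transition matrix has already been built from superpixel affinities.

```python
# Sketch: expected absorption time of transient nodes in an absorbing Markov chain.
import numpy as np

def absorbed_time(P, transient_idx):
    # P: full row-stochastic transition matrix over superpixels
    # transient_idx: indices of the non-absorbing (inner) superpixels
    Q = P[np.ix_(transient_idx, transient_idx)]
    N = np.linalg.inv(np.eye(len(transient_idx)) - Q)   # fundamental matrix (I - Q)^-1
    return N @ np.ones(len(transient_idx))              # expected steps before absorption
```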


Subjects
Breast Neoplasms/diagnostic imaging, Computer-Assisted Image Processing/methods, Markov Chains, Mammary Ultrasonography, Algorithms, Breast Neoplasms/pathology, Female, Humans, Theoretical Models
18.
J Xray Sci Technol ; 27(5): 839-856, 2019.
Article in English | MEDLINE | ID: mdl-31306148

ABSTRACT

BACKGROUND: Breast cancer has the highest cancer prevalence among women worldwide. Early detection of breast cancer is crucial for successful treatment and for reducing the cancer mortality rate. However, tumor detection in breast ultrasound (US) images is still a challenging task in computer-aided diagnosis (CAD). OBJECTIVE: This study aims to develop a novel automated algorithm for breast tumor detection based on deep learning. METHODS: We propose a new deep learning network, named the One-step model, which has one input and two outputs: the first is the segmentation result, and the second is used for false-positive reduction. The proposed One-step model includes three key components: a Base-net, a Seg-net, and a Cls-net based on anchor boxes. DenseNet is used to construct the Base-net, the decoder part of RefineNet serves as the Seg-net, and several middle layers of the Base-net and Seg-net are connected to the Cls-net. From the first output produced by the Base-net and Seg-net, the model detects a series of suspicious lesion regions; the second output from the Cls-net is then used to recognize and remove false-positive regions. RESULTS: Experimental results showed that the new model achieved a competitive detection result with a 90.78% F1 score, 8.55% higher than the Single Shot MultiBox Detector (SSD). In addition, the new model is computationally efficient, with a cost comparable to SSD. CONCLUSIONS: We established a novel One-step model that improves localization accuracy by generating more precise bounding boxes via the Seg-net and removing false targets with a second detection branch (the Cls-net), while real-time tumor detection is achieved by sharing the common Base-net. The experimental results showed that the new model performs well on various irregular and blurred ultrasound images. This study therefore demonstrates the feasibility of applying a deep learning scheme to detect breast lesions depicted in US images.


Subjects
Breast Neoplasms/diagnostic imaging, Deep Learning, Computer-Assisted Radiographic Image Interpretation/methods, Mammary Ultrasonography/methods, Algorithms, Breast/diagnostic imaging, Breast Neoplasms/pathology, Female, Humans, Computer-Assisted Image Processing, Mammography, Neural Networks (Computer)
19.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 35(5): 679-687, 2018 10 25.
Article in Chinese | MEDLINE | ID: mdl-30370705

ABSTRACT

Ultrasound is the best way to diagnose thyroid nodules, and calcification is an important characteristic for discriminating benign from malignant nodules. However, calcification cannot be extracted accurately from ultrasound images because of the capsule wall and other internal tissues. In this paper, deep learning is first proposed to extract calcification, and two improvements are made on the basis of the AlexNet convolutional neural network. First, adding corresponding unpooling and deconvolution (deconv2D) layers allows the network to be trained on the required features and finally to extract the calcification feature. Second, modifying the number of convolution kernels and fully connected layer nodes makes feature extraction more refined. The final network combines the two improvements. To verify the method, we collected 8,416 images with calcification and 10,844 without. The results showed that the calcification extraction accuracy of the improved AlexNet network was 86%, a large improvement over traditional methods, providing an effective means for the identification of benign and malignant thyroid nodules.
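The unpooling-plus-deconvolution idea can be sketched in PyTorch as below, where max-pooling indices from the encoder are reused by MaxUnpool2d before a transposed convolution restores the spatial resolution; the channel counts are illustrative, not the paper's configuration.

```python
# Sketch: pair max-pooling with unpooling and a transposed convolution.
import torch
import torch.nn as nn

conv = nn.Conv2d(1, 16, kernel_size=3, padding=1)
pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)   # keep indices for unpooling
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)
deconv = nn.ConvTranspose2d(16, 1, kernel_size=3, padding=1)         # "deconv2D" layer

x = torch.randn(1, 1, 64, 64)
feat = torch.relu(conv(x))
pooled, idx = pool(feat)
restored = deconv(unpool(pooled, idx))        # back to the input resolution (1, 1, 64, 64)
print(restored.shape)
```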

20.
Exp Anim ; 67(2): 249-257, 2018 May 10.
Article in English | MEDLINE | ID: mdl-29332859

ABSTRACT

This study aimed to assess the severity of fatty liver (FL) by analyzing ultrasound radiofrequency (RF) signals in rats. One hundred and twenty rats (72 in the FL group and 48 in the control group) were used. Histological results were the gold standard: 42 cases had normal livers (N), 30 had mild FL (L1), 25 had moderate FL (L2), 13 had severe FL (L3), and 10 cases were excluded from the study. Four RF parameters (Mean, Mean/SD ratio [MSR], skewness [SK], and kurtosis [KU]) were extracted. Univariate analysis, Spearman correlation analysis, and stepwise regression analysis were used to select the most powerful predictors. Receiver operating characteristic (ROC) analysis was used to compare the diagnostic efficacy of single indexes with a combined index (Y) expressed by a regression equation. Mean, MSR, SK, and KU were significantly correlated with FL grade (r=0.71, P<0.001; r=0.81, P<0.001; r=-0.79, P<0.001; and r=-0.74, P<0.001). The regression equation was Y = -4.48 + 3.20 × 10⁻² X1 + 3.15 X2 (P<0.001), where Y is the hepatic steatosis grade, X1 is Mean, and X2 is MSR. ROC analysis showed that the areas under the curve of the combined index (Y) were superior to those of the single indexes (Mean, MSR, SK, and KU) in evaluating hepatic steatosis grade: 0.95 (L≥L1), 0.98 (L≥L2), and 0.99 (L≥L3). Quantitative analysis of ultrasound RF signals is a new, noninvasive, and promising sonography-based approach for the assessment of FL.
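As a worked example of the reported regression equation, the snippet below evaluates Y = -4.48 + 3.20 × 10⁻² × Mean + 3.15 × MSR; the input values are invented purely to show the arithmetic.

```python
# Worked example of the reported steatosis-grade regression equation.
def steatosis_score(mean_rf: float, msr: float) -> float:
    return -4.48 + 3.20e-2 * mean_rf + 3.15 * msr

print(steatosis_score(mean_rf=120.0, msr=1.1))   # hypothetical Mean and MSR values
```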


Subjects
Fatty Liver/diagnostic imaging, Liver/diagnostic imaging, Ultrasonography/methods, Animals, Fatty Liver/pathology, Liver/pathology, Male, ROC Curve, Wistar Rats, Severity of Illness Index